10 research outputs found

    Automated competitive analysis of real time scheduling with graph games

    This paper is devoted to automatic competitive analysis of real-time scheduling algorithms for firm-deadline tasksets, where only completed tasks contribute some utility to the system. Given such a taskset T, the competitive ratio of an on-line scheduling algorithm A for T is the worst-case utility ratio of A over the utility achieved by a clairvoyant algorithm. We leverage the theory of quantitative graph games to address the competitive analysis and competitive synthesis problems. For the competitive analysis case, given any taskset T and any finite-memory on-line scheduling algorithm A, we show that the competitive ratio of A in T can be computed in polynomial time in the size of the state space of A. Our approach is flexible, as it also provides ways to model meaningful constraints on the released task sequences that determine the competitive ratio. We provide an experimental study of many well-known on-line scheduling algorithms, which demonstrates the feasibility of our competitive analysis approach: it effectively replaces human ingenuity (required for finding worst-case scenarios) by computing power. For the competitive synthesis case, we are just given a taskset T, and the goal is to automatically synthesize an optimal on-line scheduling algorithm A, i.e., one that guarantees the largest competitive ratio possible for T. (Preliminary versions of this paper have appeared in Chatterjee et al. (2013, 2014).)
We show how the competitive synthesis problem can be reduced to a two-player graph game with partial information, and establish that the computational complexity of solving this game is NP-complete. The competitive synthesis problem is hence in NP in the size of the state space of the non-deterministic labeled transition system encoding the taskset. Overall, the proposed framework assists in the selection of suitable scheduling algorithms for a given taskset, which is in fact the most common situation in real-time systems design.
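At the heart of the analysis case, ratio and mean-payoff objectives on graphs reduce to finding cycles of minimum mean weight. As a rough illustration of that primitive (not the authors' implementation), the following sketch uses Karp's classical O(nm) algorithm to compute the minimum cycle mean of a weighted digraph:

```python
import math

def min_mean_cycle(n, edges):
    """Karp's algorithm: minimum mean weight of a cycle in a digraph.

    n: number of vertices (0..n-1); edges: list of (u, v, w) triples.
    Returns math.inf if the graph is acyclic.
    """
    INF = math.inf
    # d[k][v]: minimum weight of a walk with exactly k edges ending at v;
    # starting everywhere with cost 0 (super-source trick) leaves cycle
    # means unaffected.
    d = [[INF] * n for _ in range(n + 1)]
    for v in range(n):
        d[0][v] = 0.0
    for k in range(1, n + 1):
        for u, v, w in edges:
            if d[k - 1][u] + w < d[k][v]:
                d[k][v] = d[k - 1][u] + w
    best = INF
    for v in range(n):
        if d[n][v] < INF:
            # Karp's formula: mu* = min_v max_k (d[n][v] - d[k][v]) / (n - k)
            worst = max((d[n][v] - d[k][v]) / (n - k)
                        for k in range(n) if d[k][v] < INF)
            best = min(best, worst)
    return best
```

For example, in a graph with the two-edge cycle 0 → 1 (weight 1) and 1 → 0 (weight 3) plus a self-loop of weight 5 at vertex 2, the minimum cycle mean is 2.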

    Simulation-Based Identification of Operating Point Range for a Novel Laser-Sintering Machine for Additive Manufacturing of Continuous Carbon-Fibre-Reinforced Polymer Parts

    Additive manufacturing using continuous carbon-fibre-reinforced polymer (CCFRP) presents an opportunity to create high-strength parts suitable for aerospace, engineering, and other industries. Continuous fibres reinforce the load-bearing path, enhancing the mechanical properties of these parts. However, the existing additive manufacturing processes for CCFRP parts have numerous disadvantages. Resin- and extrusion-based processes require time-consuming and costly post-processing to remove the support structures, severely restricting the design flexibility. Additionally, the production of small batches demands considerable effort. In contrast, laser sintering has emerged as a promising alternative in industry. It enables the creation of robust parts without needing support structures, offering efficiency and cost-effectiveness in producing single units or small batches. Utilising an innovative laser-sintering machine equipped with automated continuous fibre integration, this study aims to merge the benefits of laser-sintering technology with the advantages of continuous fibres. The paper provides an outline, using a finite element model in COMSOL Multiphysics, for simulating and identifying an optimised operating point range for the automated integration of continuous fibres. The results demonstrate a remarkable reduction in processing time of 233% for the fibre integration and a reduction of 56% for the width and 44% for the depth of the heat-affected zone compared to the initial setup.

    The effect of forgetting on the performance of a synchronizer

    We study variants of the α-synchronizer by Awerbuch (1985) within a distributed message passing system with probabilistic message loss. The purpose of a synchronizer is to maintain a virtual (lock-step) round structure, which simplifies the design of higher-level distributed algorithms. The underlying idea of an α-synchronizer is to let processes continuously exchange round numbers and to allow a process to proceed to the next round only after it has witnessed that all processes have already started the current round. In this work, we study the performance of several synchronizers in an environment with probabilistic message loss. In particular, we analyze how different strategies of forgetting affect the round durations. The synchronizer variants considered differ in the times at which processes discard part of their accumulated knowledge during the execution. Possible applications can be found, e.g., in sensor fusion, where sensor data become outdated and thus invalid after a certain amount of time. For all synchronizer variants considered, we develop corresponding Markov chain models and quantify the performance degradation using both analytic approaches and Monte-Carlo simulations. Our results allow us to explicitly calculate the asymptotic behavior of the round durations: while in systems with very reliable communication the effect of forgetting is negligible, it is more profound in systems with less reliable communication. Our study thus provides computationally efficient bounds on the performance of the (non-forgetting) α-synchronizer and allows us to quantitatively assess the effect that accumulated knowledge has on the performance.
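The advancement rule of the (non-forgetting) α-synchronizer can be sketched with a toy Monte-Carlo simulation. The slotted broadcast model and i.i.d. loss probability below are simplifying assumptions for illustration, not the paper's exact system model:

```python
import random

def simulate_alpha_synchronizer(n=3, loss=0.3, rounds=2000, seed=1):
    """Estimate the average round duration (in time slots) of a
    non-forgetting alpha-synchronizer under i.i.d. message loss.

    Assumed model: in every slot each process broadcasts its round
    number; each message is lost independently with probability `loss`;
    a process enters round r+1 once it has heard round >= r from all
    processes (knowledge accumulates and is never discarded).
    """
    rng = random.Random(seed)
    round_no = [0] * n                    # current round of each process
    heard = [[0] * n for _ in range(n)]   # heard[i][j]: max round i heard from j
    slots = 0
    while min(round_no) < rounds:
        slots += 1
        for i in range(n):
            for j in range(n):
                # A process always knows its own round; other messages
                # get through with probability 1 - loss.
                if i == j or rng.random() >= loss:
                    heard[j][i] = max(heard[j][i], round_no[i])
        for i in range(n):
            if all(heard[i][j] >= round_no[i] for j in range(n)):
                round_no[i] += 1
    return slots / rounds
```

With perfectly reliable links (`loss=0.0`) every round takes exactly one slot; increasing the loss probability stretches the estimated round duration, mirroring the degradation the paper quantifies analytically.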

    Brief Announcement: The Degrading Effect of Forgetting on a Synchronizer

    A strategy to increase an algorithm's robustness against internal memory corruption is to let processes actively discard part of their accumulated knowledge during execution. We study how different strategies of forgetting affect the performance of a synchronizer in an environment with probabilistic message loss.

    The Effect of Forgetting on the Performance of a Synchronizer

    We study variants of the α-synchronizer by Awerbuch (J. ACM, 1985) within a distributed message passing system with probabilistic message loss. The purpose of synchronizers is to maintain a virtual (discrete) round structure. Their idea essentially is to let processes continuously exchange round numbers and to allow a process to proceed to the next round only after it has witnessed that all processes have already started its current round. In this work, we study how four different, naturally chosen strategies of forgetting affect the performance of these synchronizers. The variants differ in the times at which processes discard part of their accumulated knowledge during execution. Such actively forgetting synchronizers have applications, e.g., in sensor fusion, where sensor data becomes outdated and thus invalid after a certain amount of time. We give analytical formulas to quantify the degradation of the synchronizers' performance in an environment with probabilistic message loss. In particular, the formulas allow us to explicitly calculate the performance's asymptotic behavior. Interestingly, all considered synchronizer variants behave similarly in systems with low message loss, while one variant shows fundamentally different behavior from the remaining three in systems with high message loss. The theoretical results are backed up by Monte-Carlo simulations.

    Real-Time Performance Analysis of Synchronous Distributed Systems

    The time it takes for an algorithm to perform its task is a central question in computer science. It has been extensively studied for (centralized) sequential algorithms, while a comprehensive treatment of time complexity in the distributed setting is still lacking. In synchronous round-based distributed algorithms, the number of rounds until the problem is solved represents a performance measure analogous to standard time complexity (Newtonian real-time), but it sweeps under the rug the many intricacies that occur within a round, for example due to faults, the communication system, or timing uncertainties caused by asynchrony.
This thesis provides a novel framework for analyzing distributed systems running multiple independent distributed algorithms simultaneously on a shared message passing communication infrastructure. In particular, the previously mentioned timing uncertainties are addressed, and the performance of the executed algorithms is analyzed with respect to Newtonian real-time. To do so, the internals of a round, that is, transmitting and receiving messages, must be taken into account. We study these mechanisms with different mathematical tools. On the transmitter side, a real-time scheduler is responsible for scheduling the messages of multiple algorithms on the shared communication channels. Since suitable fault-tolerance techniques allow us to deal with dropped messages in a synchronous distributed algorithm, messages can be modeled as firm-deadline jobs with the end of a round as their deadline. Obviously, a good scheduling algorithm maximizes the cumulative utility gained by the successfully scheduled jobs. The quality of a scheduler is usually characterized by its competitive factor, that is, the performance of the scheduler with respect to an optimal clairvoyant scheduler that knows the future. This thesis lays the foundation for a new approach to automatically perform the competitive analysis of scheduling algorithms with respect to given task sets. This is done by using a reduction to the problem of finding minimum mean-weight cycles in multi-objective graphs. In addition, albeit being computationally hard, algorithmic game theory also allows us to synthesize optimal scheduling algorithms. On the receiver side, a synchronizer algorithm can be used to maintain a consistent round structure by compensating for messages dropped by the scheduler. The performance of a distributed algorithm running atop such a synchronizer directly depends on the performance of the synchronizer.
Abstracting dropped (and otherwise lost) messages by a probabilistic link failure model allows us to calculate the expected round duration using Markov theory. By analyzing the series of the starting times of the rounds generated by the synchronizer, and by modeling its execution as a Markov chain, this thesis finally develops results regarding the expected round duration.
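The Markov-chain route to an expected round duration can be sketched on a toy chain: compute the stationary distribution, then take the expectation of a per-state round duration. The two-state chain and all numbers below are hypothetical, purely to illustrate the mechanics:

```python
def stationary_2state(a, b):
    """Stationary distribution of the 2-state Markov chain with
    transition matrix P = [[1-a, a], [b, 1-b]], where a = P(0 -> 1)
    and b = P(1 -> 0). Solving pi P = pi with pi0 + pi1 = 1 gives
    pi0 = b / (a + b)."""
    return (b / (a + b), a / (a + b))

# Hypothetical illustration: state 0 = "round completes in one slot",
# state 1 = "a lost message stretches the round to two slots".
a, b = 0.1, 0.6                      # assumed transition probabilities
pi0, pi1 = stationary_2state(a, b)   # long-run fraction of time per state
durations = (1.0, 2.0)               # assumed slots per round in each state
expected_round_duration = pi0 * durations[0] + pi1 * durations[1]
```

With these numbers the chain spends 6/7 of its rounds in the fast state, giving an expected round duration of 8/7 slots; the thesis's actual chains track the synchronizer's knowledge state and are far larger, but the expectation is computed along the same lines.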

    Automated analysis of real-time scheduling using graph games

    In this paper, we introduce the powerful framework of graph games for the analysis of real-time scheduling with firm deadlines. We introduce a novel instance of a partial-observation game that is suitable for this purpose, and prove decidability of all the involved decision problems. We derive a graph game that allows the automated computation of the competitive ratio (along with an optimal witness algorithm for the competitive ratio) and establish an NP-completeness proof for the graph game problem. For a given on-line algorithm, we present polynomial-time solutions for computing (i) the worst-case utility; (ii) the worst-case utility ratio w.r.t. a clairvoyant off-line algorithm; and (iii) the competitive ratio. A major strength of the proposed approach lies in its flexibility w.r.t. incorporating additional constraints on the adversary and/or the algorithm, including limited maximum or average load, finiteness of periods of overload, etc., which are easily added by means of additional instances of standard objective functions for graph games.

    A framework for automated competitive analysis of on-line scheduling of firm-deadline tasks

    We present a flexible framework for the automated competitive analysis of on-line scheduling algorithms for firm-deadline real-time tasks based on multi-objective graphs: Given a task set and an on-line scheduling algorithm specified as a labeled transition system, along with some optional safety, liveness, and/or limit-average constraints for the adversary, we automatically compute the competitive ratio of the algorithm w.r.t. a clairvoyant scheduler. We demonstrate the flexibility and power of our approach by comparing the competitive ratio of several on-line algorithms, including Dover, that have been proposed in the past, for various task sets. Our experimental results reveal that none of these algorithms is universally optimal, in the sense that there are task sets where other schedulers provide better performance. Our framework is hence a very useful design tool for selecting optimal algorithms for a given application.
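The utility-ratio notion that all of these papers build on can be made concrete with a tiny sketch. The non-preemptive job model and the brute-force clairvoyant baseline below are simplifying assumptions for illustration; the framework itself works on labeled transition systems, not enumeration:

```python
from itertools import permutations

def utility(jobs, order):
    """Cumulative utility of running `jobs` non-preemptively in `order`.

    Each job is (release, computation, deadline, value); with firm
    deadlines, only jobs that finish by their deadline contribute value.
    """
    t, total = 0, 0
    for i in order:
        release, computation, deadline, value = jobs[i]
        t = max(t, release) + computation   # start no earlier than release
        if t <= deadline:
            total += value                  # met the firm deadline
    return total

def clairvoyant_utility(jobs):
    """Best achievable utility, by brute force over all orders
    (exponential; fine only for tiny illustrative instances)."""
    return max(utility(jobs, p) for p in permutations(range(len(jobs))))
```

For instance, with the hypothetical jobs `[(0, 1, 1, 1), (0, 1, 2, 2)]`, running them in release order meets both deadlines for utility 3, while the reverse order loses the tight-deadline job; the worst-case ratio of an on-line algorithm's utility to the clairvoyant value over all release sequences is exactly the competitive ratio these frameworks compute.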